
    Software Framework for Customized Augmented Reality Headsets in Medicine

    The growing availability of self-contained and affordable augmented reality headsets such as the Microsoft HoloLens is encouraging the adoption of these devices in the healthcare sector as well. However, technological and human-factor limitations still hinder their routine use in clinical practice. Chief among these drawbacks are their general-purpose nature and the lack of a standardized framework suited to medical applications and free of platform-dependent tracking techniques and/or complex calibration procedures. To overcome such limitations, in this paper we present a software framework designed to support the development of augmented reality applications for custom-made head-mounted displays intended to aid high-precision manual tasks. The software platform is highly configurable, computationally efficient, and allows the deployment of augmented reality applications capable of supporting in situ visualization of medical imaging data. The framework can provide both optical and video see-through augmentations, and it features a robust optical tracking algorithm. An experimental study was designed to assess the efficacy of the platform in guiding a simulated surgical incision task. In the experiments, the user was asked to perform a digital incision task, with and without the aid of the augmented reality headset. Task accuracy was evaluated by measuring the similarity between the traced curve and the planned one. The average error in the augmented reality tests was < 1 mm. The results confirm that the proposed framework, coupled with the new-concept headset, may boost the integration of augmented reality headsets into routine clinical practice.
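
    The abstract does not spell out the similarity measure used to score the traced incision against the planned curve, so the following is only a minimal, hypothetical sketch: it scores accuracy as the mean closest-point distance (in millimetres) between the traced and planned point sets using NumPy; the function name and the synthetic example data are assumptions, not taken from the paper.

    ```python
    import numpy as np

    def mean_tracing_error(traced, planned):
        """Mean closest-point distance (mm) from each traced point to the planned curve.

        traced, planned: (N, 2) or (N, 3) arrays of points in millimetres.
        Assumed metric; the paper does not detail its exact formulation.
        """
        # Distance from every traced point to every planned point
        dists = np.linalg.norm(traced[:, None, :] - planned[None, :, :], axis=-1)
        # Keep, for each traced point, the distance to its nearest planned point
        return dists.min(axis=1).mean()

    # Synthetic example: a noisy traced circle compared against the planned one
    theta = np.linspace(0, 2 * np.pi, 200)
    planned = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], axis=1)
    traced = planned + np.random.normal(scale=0.5, size=planned.shape)
    print(f"mean error: {mean_tracing_error(traced, planned):.2f} mm")
    ```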

    Augmented reality in open surgery

    Augmented reality (AR) has been successfully providing surgeons with extensive visual information about surgical anatomy to assist them throughout the procedure. AR allows surgeons to view the surgical field through a superimposed 3D virtual model of anatomical details. However, open surgery presents new challenges. This study provides a comprehensive overview of the available literature regarding the use of AR in open surgery, in both clinical and simulated settings. In this way, we aim to analyze current trends and solutions to help developers and end-users discuss and understand the benefits and shortcomings of these systems in open surgery. We performed a PubMed search of the available literature, updated to January 2018, using the terms (1) “augmented reality” AND “open surgery”, (2) “augmented reality” AND “surgery” NOT “laparoscopic” NOT “laparoscope” NOT “robotic”, (3) “mixed reality” AND “open surgery”, (4) “mixed reality” AND “surgery” NOT “laparoscopic” NOT “laparoscope” NOT “robotic”. The aspects evaluated were the following: real data source, virtual data source, visualization processing modality, tracking modality, registration technique, and AR display type. The initial search yielded 502 studies. After removing duplicates and screening abstracts, a total of 13 relevant studies were chosen. In 1 of the 13 studies, in vitro experiments were performed, while the rest were carried out in a clinical setting, including pancreatic, hepatobiliary, and urogenital surgeries. AR systems in open surgery appear to be versatile and reliable tools in the operating room. However, some technological limitations need to be addressed before implementing them into routine practice.

    Real time event-based segmentation to classify locomotion activities through a single inertial sensor

    We propose an event-based dynamic segmentation technique for the classification of locomotion activities, able to detect the mid-swing, initial contact and end contact events. The technique is based on a shank-mounted inertial sensor incorporating a tri-axial accelerometer and a tri-axial gyroscope, and it is tested on four different locomotion activities: walking, stair ascent, stair descent and running. Gyroscope data along one component are used to dynamically determine the window size for segmentation, and a number of features are then extracted from these segments. The event-based segmentation technique was compared against three different fixed-window-size segmentations, in terms of classification accuracy, on two different datasets and with two different feature sets. The dynamic event-based segmentation showed an accuracy improvement of around 5% (97% vs. 92% and 92% vs. 87%) and 1-2% (89% vs. 87% and 97% vs. 96%) for the two datasets, respectively, confirming the need to incorporate an event-based criterion to increase performance in the classification of motion activities.
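
    As a rough illustration of how such events might be extracted from a single gyroscope component, the hypothetical sketch below detects mid-swing as a prominent peak in the sagittal-plane angular velocity and takes the surrounding local minima as end contact and initial contact; the thresholds, axis convention and SciPy-based rules are assumptions, since the abstract does not give the actual detection criteria.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def segment_gait_cycles(gyro_z, fs=100.0, min_peak=1.5):
        """Event-based segmentation from one shank gyroscope axis (rad/s).

        Assumed rule: mid-swing = prominent positive peak; end contact /
        initial contact = nearest local minima before / after that peak.
        """
        # Mid-swing candidates: large positive peaks, at least 0.5 s apart
        mid_swing, _ = find_peaks(gyro_z, height=min_peak, distance=int(0.5 * fs))
        # Candidate contact events: local minima of the angular velocity
        minima, _ = find_peaks(-gyro_z)
        segments = []
        for ms in mid_swing:
            before = minima[minima < ms]
            after = minima[minima > ms]
            if len(before) and len(after):
                # (end contact, mid-swing, initial contact) sample indices
                segments.append((before[-1], ms, after[0]))
        return segments
    ```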

    Heart disease classification ensemble optimization using Genetic algorithm

    Heart disease diagnosis is considered one of the most complicated tasks in the medical field. An accurate and efficient automated system can be very helpful in performing heart disease diagnosis. In this research, we propose a classifier ensemble method to improve the decisions of the classifiers for heart disease diagnosis. A homogeneous ensemble is applied for heart disease classification, and the results are then optimized using a genetic algorithm. Data are evaluated using 10-fold cross-validation, and the performance of the system is assessed through classifier accuracy, sensitivity and specificity to check the feasibility of our system. Comparison of our methodology with an existing ensemble technique has shown considerable improvements in terms of classification accuracy.
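
    The abstract does not state which base classifiers form the homogeneous ensemble or what the genetic algorithm encodes, so the sketch below is only one plausible interpretation: a bagged ensemble of decision trees whose voting weights are tuned by a simple genetic algorithm on a validation set. All function names, the GA operators and the hyper-parameters are assumptions.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def build_ensemble(X, y, n_members=10, seed=0):
        """Homogeneous ensemble: decision trees trained on bootstrap samples."""
        rng = np.random.default_rng(seed)
        members = []
        for _ in range(n_members):
            idx = rng.integers(0, len(X), len(X))  # bootstrap resampling
            members.append(DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx]))
        return members

    def weighted_accuracy(weights, members, X, y):
        # Weighted soft vote (assumes every bootstrap sample contains all classes)
        probs = sum(w * m.predict_proba(X) for w, m in zip(weights, members))
        return (probs.argmax(axis=1) == y).mean()

    def ga_optimize(members, X_val, y_val, pop=30, gens=40, seed=0):
        """Genetic algorithm over ensemble voting weights (illustrative encoding)."""
        rng = np.random.default_rng(seed)
        population = rng.random((pop, len(members)))
        for _ in range(gens):
            fitness = np.array([weighted_accuracy(w, members, X_val, y_val)
                                for w in population])
            parents = population[np.argsort(fitness)[-pop // 2:]]      # selection
            pairs = parents[rng.integers(0, len(parents), (pop // 2, 2))]
            children = pairs.mean(axis=1)                              # crossover
            children += rng.normal(0.0, 0.1, children.shape)           # mutation
            population = np.vstack([parents, np.clip(children, 0.0, None)])
        fitness = np.array([weighted_accuracy(w, members, X_val, y_val)
                            for w in population])
        return population[np.argmax(fitness)]
    ```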

    Pre-Processing Effect on the Accuracy of Event-Based Activity Segmentation and Classification through Inertial Sensors

    Inertial sensors are increasingly being used to recognize and classify physical activities in a variety of applications. For monitoring and fitness applications, it is crucial to develop methods able to segment each activity cycle, e.g., a gait cycle, so that the successive classification step may be more accurate. To increase detection accuracy, pre-processing is often used, with a concurrent increase in computational cost. In this paper, the effect of pre-processing operations on the detection and classification of locomotion activities was investigated, to check whether the presence of pre-processing significantly contributes to an increase in accuracy. The pre-processing stages evaluated in this study were inclination correction and de-noising. Level walking, stair ascending, stair descending and running were monitored using a shank-mounted inertial sensor. Raw and filtered segments, obtained from a modified version of a rule-based gait detection algorithm optimized for sequential processing, were processed to extract time- and frequency-based features for physical activity classification through a support vector machine classifier. The proposed method accurately detected >99% of gait cycles from raw data and produced >98% accuracy on these segmented gait cycles. Pre-processing did not substantially increase classification accuracy, thus highlighting the possibility of reducing the amount of pre-processing for real-time applications.
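
    The specific time- and frequency-based features are not listed in the abstract; the sketch below illustrates one plausible feature-extraction step over already-segmented gait cycles, followed by an SVM classifier, with an assumed feature set (per-channel statistics, dominant frequency and spectral energy) and an assumed sampling rate.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def cycle_features(segment, fs=100.0):
        """Illustrative features for one gait cycle.

        segment: (N, 6) array of raw accelerometer + gyroscope samples.
        """
        feats = []
        for ch in segment.T:
            spectrum = np.abs(np.fft.rfft(ch))
            freqs = np.fft.rfftfreq(len(ch), d=1.0 / fs)
            feats += [ch.mean(), ch.std(), ch.min(), ch.max(),  # time domain
                      freqs[spectrum.argmax()],                 # dominant frequency
                      spectrum.sum()]                           # spectral energy
        return np.array(feats)

    # Assumed workflow: segments come from the gait detection step,
    # labels are the corresponding activities.
    # X = np.array([cycle_features(s) for s in segments])
    # clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
    ```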

    Varying behavior of different window sizes on the classification of static and dynamic physical activities from a single accelerometer

    The accuracy of systems able to recognize daily living activities in real time heavily depends on the signal segmentation step. So far, windowing approaches have been used to segment data, and the window size is usually chosen based on previous studies. However, the literature is vague on the effect of window size on the obtained activity recognition accuracy when both short- and long-duration activities are considered. In this work, we present the impact of window size on the recognition of daily living activities, where transitions between different activities are also taken into account. The study was conducted on nine participants who wore a tri-axial accelerometer on their waist and performed some short-duration (sitting, standing, and transitions between activities) and long-duration (walking, stair descending and stair ascending) activities. Five different classifiers were tested, and among the different window sizes, a 1.5 s window size was found to represent the best trade-off in recognition among activities, with an obtained accuracy well above 90%. Differences in recognition accuracy for each activity highlight the utility of developing adaptive segmentation criteria based on the duration of the activities.
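
    As a concrete illustration of the fixed-size windowing being compared, the sketch below splits a tri-axial accelerometer stream into 1.5 s windows; the 50 Hz sampling rate and 50% overlap are illustrative assumptions, not parameters reported in the abstract.

    ```python
    import numpy as np

    def sliding_windows(acc, fs=50.0, window_s=1.5, overlap=0.5):
        """Split a tri-axial accelerometer stream (N, 3) into fixed-size windows."""
        size = int(window_s * fs)
        step = max(1, int(size * (1.0 - overlap)))
        return [acc[start:start + size]
                for start in range(0, len(acc) - size + 1, step)]

    # Example: one minute of synthetic 50 Hz data cut into 1.5 s windows
    acc = np.random.randn(60 * 50, 3)
    windows = sliding_windows(acc)
    print(len(windows), windows[0].shape)  # number of windows and window shape
    ```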

    The influence of haptic feedback on hand movement regularity in elderly adults

    "Eight elderly adults and eight young adults were requested to perform circular movements with the hand through a commercial haptic platform, under different conditions in an ecological setting: with visual feedback, and with a force field produced by the machine. Measures of kinematics and movement regularity (maximum velocity, duration, mean square jerk, and its normalized form) were captured to determine the effect of these feedbacks on hand kinematics. In the elderly group, regularity was lower when haptic feedback was given in combination with visual feedback as compared to providing haptic feedback alone. This effect appeared also in the group of young adults, and outlines the possibility that the ability to integrate different feedbacks may need more time to be learned, even if the feedbacks are generated to facilitate movements.

    Haptic Feedback Affects Movement Regularity of Upper Extremity Movements in Elderly Adults

    Eight elderly adults were asked to perform circular movements with the hand through a commercial haptic platform, in two different conditions: with visual feedback, and with a facilitating force field produced by the machine. A measure of movement regularity (the mean square jerk in its normalized form) was captured to determine the effect of these feedback conditions on hand kinematics. Regularity was higher when haptic feedback was given alone (MSJratio 6.48 +/- 0.15) than when it was combined with visual feedback (MSJratio 7.46 +/- 0.18). We interpreted these differences as indicating that the ability to process visual information in trajectory-tracking conditions is greater than the ability to cope with external force fields, even when the force field is provided as a hypothetically facilitating one.
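
    The abstract reports a normalized mean square jerk (MSJratio) without giving its formula, so the sketch below is only an assumed stand-in: it computes the mean square jerk of a sampled hand trajectory and a common dimensionless smoothness normalization (integrated squared jerk scaled by movement duration and path length), which may differ from the paper's exact MSJratio definition.

    ```python
    import numpy as np

    def mean_square_jerk(pos, fs):
        """Mean square jerk of a hand trajectory pos (N, 2 or 3) sampled at fs Hz.

        Jerk is the third time derivative of position; larger values indicate
        less smooth (less regular) movement.
        """
        jerk = np.diff(pos, n=3, axis=0) * fs**3  # finite-difference 3rd derivative
        return np.mean(np.sum(jerk**2, axis=1))

    def dimensionless_jerk(pos, fs):
        """Assumed normalization: integrated squared jerk * duration^5 / path^2."""
        jerk = np.diff(pos, n=3, axis=0) * fs**3
        integrated_sq_jerk = np.sum(np.sum(jerk**2, axis=1)) / fs
        duration = len(pos) / fs
        path = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
        return integrated_sq_jerk * duration**5 / path**2
    ```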